An efficient improvement of the Newton method for solving nonconvex optimization problems
Authors
Abstract:
The Newton method is one of the best-known line search methods for minimizing functions. It is well known that the search direction and the step length play important roles in this class of methods for solving optimization problems. In this investigation, a new modification of the Newton method for solving unconstrained optimization problems is presented. The significant merit of the proposed method is that the step length $\alpha_k$ at each iteration is equal to 1. Additionally, the convergence analysis of this iterative algorithm is established under suitable conditions. Some illustrative examples are provided to show the validity and applicability of the presented method, and a comparison is made with several other existing methods.
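As a rough illustration of the idea highlighted above, the sketch below runs a plain Newton iteration with the step length fixed at $\alpha_k = 1$ on the Rosenbrock test function. It assumes an exact Hessian solve and is only a generic unit-step Newton iteration, not the specific modification proposed in the paper.

```python
# Generic sketch of a Newton iteration with the step length fixed at 1.
# This is NOT the paper's modified method; it only illustrates taking the
# full Newton step x_{k+1} = x_k + d_k with alpha_k = 1.
import numpy as np

def newton_unit_step(grad, hess, x0, tol=1e-8, max_iter=100):
    """Minimize a smooth function given callables for its gradient and Hessian."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:       # first-order stationarity test
            break
        d = np.linalg.solve(hess(x), -g)  # Newton direction: H(x) d = -g
        x = x + 1.0 * d                   # step length alpha_k = 1
    return x

# Example: the Rosenbrock function f(x) = (1 - x1)^2 + 100 (x2 - x1^2)^2.
def rosen_grad(x):
    return np.array([-2.0 * (1 - x[0]) - 400.0 * x[0] * (x[1] - x[0] ** 2),
                     200.0 * (x[1] - x[0] ** 2)])

def rosen_hess(x):
    return np.array([[2.0 - 400.0 * (x[1] - 3.0 * x[0] ** 2), -400.0 * x[0]],
                     [-400.0 * x[0], 200.0]])

print(newton_unit_step(rosen_grad, rosen_hess, x0=[-1.2, 1.0]))  # close to [1, 1]
```

From this starting point the full Newton step reaches the minimizer (1, 1) in a few iterations, although the objective value is not forced to decrease at every step; controlling such behavior is what a convergence analysis for a unit-step method has to address.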
Similar articles
An Efficient Neurodynamic Scheme for Solving a Class of Nonconvex Nonlinear Optimization Problems
Under a p-power (or partial p-power) transformation, the Lagrangian function of a nonconvex optimization problem becomes locally convex. In this paper, we present a neural network based on an NCP function for solving the nonconvex optimization problem. An important feature of this neural network is the one-to-one correspondence between its equilibria and the KKT points of the nonconvex optimizatio...
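For context on the NCP-function idea mentioned in this entry, the snippet below evaluates the Fischer-Burmeister function, a common NCP function (an assumption here, since the entry does not say which NCP function is used). It only illustrates how such a function encodes the complementarity part of the KKT conditions; it is not the neural network model itself.

```python
# Illustration of an NCP function (Fischer-Burmeister choice, assumed here).
# phi(a, b) = 0 exactly when a >= 0, b >= 0 and a * b = 0, which is how
# complementarity conditions from the KKT system can be turned into equations.
import numpy as np

def fischer_burmeister(a, b):
    return np.sqrt(a ** 2 + b ** 2) - a - b

print(fischer_burmeister(0.0, 3.0))  # 0.0: the pair (0, 3) is complementary
print(fischer_burmeister(2.0, 0.0))  # 0.0: the pair (2, 0) is complementary
print(fischer_burmeister(1.0, 1.0))  # about -0.586: (1, 1) is not complementary
```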
Quasi-Newton Methods for Nonconvex Constrained Multiobjective Optimization
Here, a quasi-Newton algorithm for constrained multiobjective optimization is proposed. Under suitable assumptions, global convergence of the algorithm is established.
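As background on the quasi-Newton idea this entry refers to, the sketch below shows the standard BFGS secant update for an unconstrained problem; the constrained multiobjective algorithm of the paper is necessarily more involved, so this is only an assumption-level illustration.

```python
# Standard (unconstrained) BFGS update of a Hessian approximation B.
# s = x_{k+1} - x_k, y = grad_{k+1} - grad_k; the update is skipped when
# the curvature condition s^T y > 0 fails.
import numpy as np

def bfgs_update(B, s, y):
    sy = s @ y
    if sy <= 1e-12:
        return B                                   # keep the old approximation
    Bs = B @ s
    return B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / sy

# On a quadratic with Hessian A we have y = A s, and the updated matrix
# satisfies the secant equation B_new @ s == y.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
B = np.eye(2)
s = np.array([1.0, -0.5])
y = A @ s
print(np.allclose(bfgs_update(B, s, y) @ s, y))    # True
```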
An inexact Newton method for nonconvex equality constrained optimization
We present a matrix-free line search algorithm for large-scale equality constrained optimization that allows for inexact step computations. For strictly convex problems, the method reduces to the inexact sequential quadratic programming approach proposed by Byrd et al. [SIAM J. Optim. 19(1) 351–369, 2008]. For nonconvex problems, the methodology developed in this paper allows for the presence o...
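To make the kind of step such methods compute concrete, the sketch below solves the Newton/SQP KKT system for an equality constrained problem with a dense, exact linear solve. The method described in the entry computes this step inexactly and matrix-free, so this is only an illustrative stand-in for that setting.

```python
# Newton/SQP step for: minimize f(x) subject to c(x) = 0.
# Solves the KKT system [[H, A^T], [A, 0]] [d; lam] = [-g; -c] exactly;
# an inexact, matrix-free method would only approximate this solve.
import numpy as np

def kkt_step(H, g, A, c):
    """H: Hessian of the Lagrangian, g: objective gradient,
    A: constraint Jacobian, c: constraint residual."""
    n, m = H.shape[0], A.shape[0]
    K = np.block([[H, A.T], [A, np.zeros((m, m))]])
    sol = np.linalg.solve(K, np.concatenate([-g, -c]))
    return sol[:n], sol[n:]                        # primal step, multipliers

# One step on: minimize 0.5 * ||x||^2 subject to x1 + x2 = 1, from x = 0.
x = np.zeros(2)
d, lam = kkt_step(np.eye(2), x, np.array([[1.0, 1.0]]), np.array([x.sum() - 1.0]))
print(x + d)                                       # [0.5 0.5], the minimizer
```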
An efficient approach for solving layout problems
This paper offers an approach that could be useful for diverse types of layout problems and even area allocation problems. With this approach there is no need for a large number of discrete variables; large-scale layout problems can be solved in polynomial time using only a few continuous variables. This results from dividing the area into discrete and continuous dimensions. Also, defining decision var...
An Efficient Optimization Method for Solving Unsupervised Data Classification Problems
Unsupervised data classification (or clustering) is one of the most useful tools and a descriptive task in data mining; it seeks to form homogeneous groups of objects based on similarity and is used in many medical disciplines and various applications. In general, there is no single algorithm that is suitable for all types of data, conditions, and applications. Each algorithm has ...
An efficient one-layer recurrent neural network for solving a class of nonsmooth optimization problems
Constrained optimization problems have a wide range of applications in science, economics, and engineering. In this paper, a neural network model is proposed to solve a class of nonsmooth constrained optimization problems with a nonsmooth convex objective function subject to nonlinear inequality and affine equality constraints. It is a one-layer non-penalty recurrent neural network based on the...
Journal
Volume 7, Issue 1
Pages 69-85
Publication date: 2019-01-01